A dwindling filter line search method for unconstrained optimization

Abstract

In this paper, we propose a new dwindling multidimensional filter second-order line search method for solving large-scale unconstrained optimization problems. Usually, the multidimensional filter is constructed with a fixed envelope, which is a strict condition on the gradient vectors. A dwindling multidimensional filter technique, which is a modification and improvement of the original multid...
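
The fixed envelope rejects a trial point unless its gradient improves on every stored filter entry by a fixed margin; a dwindling envelope lets that margin shrink with the step size, relaxing the test as steps get small. Below is a minimal sketch of such an acceptance rule, assuming an illustrative envelope (the `gamma` parameter and the `min(1, alpha)` dwindling factor are stand-ins, not the paper's exact construction):

```python
import numpy as np

class DwindlingFilter:
    """Multidimensional filter on |gradient| entries with a step-dependent
    (dwindling) envelope. Illustrative sketch, not the paper's exact rule."""

    def __init__(self, gamma=1e-4):
        self.gamma = gamma   # envelope width parameter (assumed)
        self.entries = []    # stored |gradient| vectors

    def acceptable(self, g, alpha):
        """Accept g if, against every stored entry e, at least one component
        of |g| undercuts e by a margin that dwindles with step size alpha."""
        a = np.abs(g)
        margin = self.gamma * min(1.0, alpha)   # dwindling envelope (assumed)
        return all(np.any(a <= e - margin * np.linalg.norm(e))
                   for e in self.entries)

    def add(self, g):
        """Store |g|, discarding entries it dominates componentwise."""
        a = np.abs(g)
        self.entries = [e for e in self.entries if not np.all(a <= e)]
        self.entries.append(a)

# Usage: test a trial gradient against the filter, then record it.
filt = DwindlingFilter()
g = np.array([0.5, -1.0])
if filt.acceptable(g, alpha=0.1):
    filt.add(g)
```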

Related articles


A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems

In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...

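A sketch of the Hessian-approximation idea: one quasi-Newton update of a scaled identity yields a positive definite matrix satisfying the secant relation. The single spectral scaling `theta` below is an illustrative stand-in for the paper's double-parameter formula, which the summary above does not spell out:

```python
import numpy as np

def scaled_bfgs_matrix(s, y, theta=None):
    """One BFGS update of the scaled identity theta*I. The result is
    positive definite and satisfies the secant relation B @ s = y
    whenever the curvature condition y @ s > 0 holds."""
    s, y = np.asarray(s, float), np.asarray(y, float)
    ys = float(y @ s)
    if ys <= 0:
        raise ValueError("curvature condition y @ s > 0 violated")
    if theta is None:
        theta = ys / float(s @ s)          # spectral scaling (assumed choice)
    B = theta * np.eye(s.size)
    B -= theta * np.outer(s, s) / float(s @ s)
    B += np.outer(y, y) / ys
    return B

# Check the secant relation on random data.
rng = np.random.default_rng(0)
s = rng.standard_normal(5)
y = s + 0.1 * rng.standard_normal(5)       # keeps y @ s > 0 here
assert np.allclose(scaled_bfgs_matrix(s, y) @ s, y)
```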

A Filter-Based Pattern Search Method for Unconstrained Optimization

We discuss a filter-based pattern search method for unconstrained optimization in this paper. To broaden the search range, we use both the filter technique and frames, which are fragments of grids, to provide a new criterion for iterate acceptance. Convergence can be ensured under some conditions. Numerical results show that the method is practical and efficient.

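For reference, a minimal poll-step skeleton over a coordinate frame; the paper's filter-based acceptance criterion is replaced here by plain monotone descent, so this only illustrates the frame/grid mechanics the abstract mentions:

```python
import numpy as np

def pattern_search(f, x0, mesh=1.0, tol=1e-6, max_iter=1000):
    """Poll the 2n coordinate-frame points around x; halve the mesh when
    no frame point is accepted. Monotone descent stands in for the
    paper's filter acceptance rule."""
    x = np.asarray(x0, float)
    fx = f(x)
    n = x.size
    frame = np.vstack([np.eye(n), -np.eye(n)])   # frame directions
    for _ in range(max_iter):
        if mesh < tol:
            break
        for d in frame:
            trial = x + mesh * d                 # a point of the current grid
            ft = f(trial)
            if ft < fx:                          # acceptance test (simplified)
                x, fx = trial, ft
                break
        else:
            mesh *= 0.5                          # unsuccessful poll: refine grid
    return x, fx

# Example: a smooth quadratic, started away from its minimizer.
x_best, f_best = pattern_search(lambda z: (z[0] - 1.0)**2 + (z[1] + 0.5)**2,
                                [3.0, -2.0])
```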

A memory gradient method without line search for unconstrained optimization

Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. The first idea of memory gradient methods was proposed by Miele and Cantrell (1969) and subsequently extended by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method which generates a descent search direction for the objective function at every iteration a...

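A sketch of the memory gradient idea: mix the previous direction into the negative gradient, with the mixing weight capped so the result is always a descent direction. The `beta` rule and the Lipschitz-based step below are illustrative choices, not Narushima and Yabe's formulas:

```python
import numpy as np

def memory_gradient(grad, x0, L, gamma=0.5, tol=1e-8, max_iter=10000):
    """Memory gradient iteration with a line-search-free step, assuming the
    gradient is L-Lipschitz. beta is capped so g @ d <= -(1-gamma)*||g||^2,
    i.e. d remains a descent direction at every iteration."""
    x = np.asarray(x0, float)
    d = -grad(x)
    for _ in range(max_iter):
        g = grad(x)
        gn = np.linalg.norm(g)
        if gn < tol:
            break
        beta = gamma * gn / max(np.linalg.norm(d), 1e-16)  # illustrative rule
        d = -g + beta * d                                  # memory term
        alpha = -(g @ d) / (L * (d @ d))                   # descent-lemma step
        x = x + alpha * d
    return x

# Example: grad of f(x) = 0.5*||x||^2 is x, with Lipschitz constant L = 1.
x_star = memory_gradient(lambda z: z, [4.0, -3.0], L=1.0)
```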

A Regularized Newton Method without Line Search for Unconstrained Optimization

In this paper, we propose a regularized Newton method without line search. The proposed method controls a regularization parameter instead of a step size in order to guarantee global convergence. We demonstrate that it is closely related to the TR-Newton method when the Hessian of the objective function is positive definite. Moreover, it does not solve nonconvex problems but linear equations a...

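A sketch of the control strategy this abstract describes: a regularization parameter `mu` is raised or lowered from the agreement between actual and predicted reduction (the trust-region connection), so each trial costs one linear solve and no line search. The update constants are illustrative, not the paper's:

```python
import numpy as np

def regularized_newton(f, grad, hess, x0, mu=1.0, tol=1e-8, max_iter=200):
    """Newton iteration that adapts a regularization parameter mu instead of
    a step size: each trial solves one linear system, and mu is updated from
    the ratio of actual to predicted reduction (trust-region style)."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        d = np.linalg.solve(H + mu * np.eye(x.size), -g)  # one linear solve
        pred = -(g @ d + 0.5 * d @ H @ d)                 # model reduction
        rho = (f(x) - f(x + d)) / max(pred, 1e-16)
        if rho > 0.25:                 # model agrees: accept, relax mu
            x = x + d
            mu = max(0.25 * mu, 1e-12)
        else:                          # poor agreement: reject, raise mu
            mu *= 4.0
    return x

# Example: the Rosenbrock function with analytic derivatives.
def f(z):
    return (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2

def grad(z):
    return np.array([-2*(1 - z[0]) - 400*z[0]*(z[1] - z[0]**2),
                     200*(z[1] - z[0]**2)])

def hess(z):
    return np.array([[2 - 400*z[1] + 1200*z[0]**2, -400*z[0]],
                     [-400*z[0], 200.0]])

x_star = regularized_newton(f, grad, hess, [-1.2, 1.0])
```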


Journal

Journal title: Mathematics of Computation

Year: 2014

ISSN: 0025-5718, 1088-6842

DOI: 10.1090/s0025-5718-2014-02847-0